Random active path model of deep neural networks with diluted binary synapses


Related articles

Diluted neural networks with adapting and correlated synapses.

We consider the dynamics of diluted neural networks with clipped and adapting synapses. Unlike previous studies, the learning rate is kept constant as the connectivity tends to infinity: the synapses evolve on a time scale intermediate between the quenched and annealing limits and all orders of synaptic correlations must be taken into account. The dynamics is solved by mean-field theory, the or...


Redundancy in active paths of deep networks: a random active path model

Deep learning has become a powerful and popular tool for a variety of machine learning tasks. However, it is extremely challenging to understand the mechanism of deep learning from a theoretical perspective. In this work, we study robustness of a deep network in its generalization capability against removal of a certain number of connections between layers. A critical value of this number is ob...
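The removal experiment described above can be illustrated with a minimal sketch: randomly delete a fraction of the connections between two layers of a toy (untrained) linear map and observe how the output degrades. The network, the names, and the keep-probabilities here are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy layer: a dense 100x100 connection matrix acting on a random input.
W = rng.standard_normal((100, 100))
x = rng.standard_normal(100)
reference = W @ x

def dilute(W, keep_prob, rng):
    """Zero out each connection independently with probability 1 - keep_prob."""
    mask = rng.random(W.shape) < keep_prob
    return W * mask

for keep_prob in (1.0, 0.9, 0.5, 0.1):
    out = dilute(W, keep_prob, rng) @ x
    # relative change of the layer's output as connections are removed
    err = np.linalg.norm(out - reference) / np.linalg.norm(reference)
    print(f"keep_prob={keep_prob:.1f}  relative output change={err:.2f}")
```

Sweeping `keep_prob` downward and watching when the output (or, in a trained network, the test accuracy) collapses is one simple way to probe for the critical dilution level the abstract mentions.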


Binary Deep Neural Networks for Speech Recognition

Deep neural networks (DNNs) are widely used in most current automatic speech recognition (ASR) systems. To guarantee good recognition performance, DNNs usually require significant computational resources, which limits their application to low-power devices. Thus, it is appealing to reduce the computational cost while keeping the accuracy. In this work, in light of the success in image recogniti...


BinaryConnect: Training Deep Neural Networks with binary weights during propagations

Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications...
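A minimal sketch of BinaryConnect-style training on a single linear layer, assuming the usual scheme: real-valued weights are retained for the updates, while the forward and backward propagations use their binarized (sign) version. The toy data and layer are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem whose exact solution has weights in {+1, -1}.
X = rng.standard_normal((32, 4))
true_w = np.array([1.0, -1.0, 1.0, -1.0])
y = X @ true_w

w_real = rng.standard_normal(4) * 0.1  # real-valued "accumulator" weights
lr = 0.01

for _ in range(200):
    w_bin = np.sign(w_real)            # binarize to +-1 for propagation
    w_bin[w_bin == 0] = 1.0            # map sign(0) to +1 by convention
    pred = X @ w_bin                   # forward pass uses binary weights
    grad = X.T @ (pred - y) / len(X)   # gradient w.r.t. the binary weights...
    w_real -= lr * grad                # ...is applied to the real-valued copy
    w_real = np.clip(w_real, -1, 1)    # keep accumulators in [-1, 1]
```

The key point is that the expensive multiplications in propagation involve only ±1 values; the full-precision weights exist solely to accumulate small gradient steps between binarizations.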


Path-SGD: Path-Normalized Optimization in Deep Neural Networks

We revisit the choice of SGD for training deep neural networks by reconsidering the appropriate geometry in which to optimize the weights. We argue for a geometry invariant to rescaling of weights that does not affect the output of the network, and suggest Path-SGD, which is an approximate steepest descent method with respect to a path-wise regularizer related to max-norm regularization. Path-S...
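The path-wise quantity underlying that regularizer can be sketched for a bias-free two-layer network (an illustrative assumption): the squared ℓ2 path norm is the sum, over all input-to-output paths, of the product of squared weights along the path, and it factorizes over the hidden units so it never requires enumerating paths.

```python
import numpy as np

def squared_path_norm(W1, W2):
    """Sum over all input->hidden->output paths of the product of squared
    weights along the path: sum_{i,j,k} W1[i,j]**2 * W2[j,k]**2,
    computed layer-wise via the factorization over hidden units j."""
    per_hidden_in = (W1 ** 2).sum(axis=0)    # sum_i W1[i, j]^2
    per_hidden_out = (W2 ** 2).sum(axis=1)   # sum_k W2[j, k]^2
    return float(per_hidden_in @ per_hidden_out)

W1 = np.array([[1.0, 2.0], [0.0, 1.0]])  # 2 inputs -> 2 hidden units
W2 = np.array([[1.0], [3.0]])            # 2 hidden units -> 1 output

# Brute-force sum over explicit paths agrees with the factorized form.
brute = sum(
    (W1[i, j] * W2[j, k]) ** 2
    for i in range(2) for j in range(2) for k in range(1)
)
print(squared_path_norm(W1, W2), brute)  # both 46.0
```

This quantity is invariant under node-wise rescaling (multiplying a hidden unit's incoming weights by c and dividing its outgoing weights by c), which is exactly the rescaling invariance the abstract argues the optimization geometry should respect.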



Journal

Journal title: Physical Review E

Year: 2018

ISSN: 2470-0045, 2470-0053

DOI: 10.1103/physreve.98.042311